The EU Artificial Intelligence Act - Part 1
Using the "PDF Summary" GPT, I uploaded the latest PDF version of the EU Artificial Intelligence Act, a 272-page document. The following article is a list of questions and answers generated by ChatGPT 4.0 with the PDF Summary add-on. This article does not constitute legal advice and may contain inaccuracies arising from the models used to generate it.
Tim: I am going to ask questions about this document. Please keep your answers to one paragraph.
PDF Summary: Great, I'm ready to assist you with questions about the document. What would you like to know?
Tim: What is the name of the new regulation?
PDF Summary: The new regulation is named the "Artificial Intelligence Act." This regulation establishes harmonized rules on artificial intelligence and amends certain Union legislative acts.
Tim: Who published this regulation?
PDF Summary: This regulation was published by the Council of the European Union.
Tim: Who does the regulation apply to?
PDF Summary: The Artificial Intelligence Act applies to providers of AI systems, irrespective of whether they are established within the European Union or in a third country, as well as to deployers of AI systems established within the European Union. It also extends to providers and deployers of AI systems established in third countries, in cases where the output produced by those systems is intended for use within the European Union. The regulation ensures that these rules apply in a non-discriminatory manner to all relevant operators, covering a wide range of entities involved in the development, distribution, and deployment of AI systems.
Tim: When does the regulation come into force?
PDF Summary: The compromise agreement for the Artificial Intelligence Act specifies that most parts of the Regulation will come into force 24 months after its adoption. However, there are slightly shorter deadlines for some elements: prohibitions will become applicable in 6 months, and provisions concerning notifying authorities and notified bodies, governance, general purpose AI models, confidentiality, and penalties will become applicable in 12 months. A slightly longer deadline of 36 months is set for high-risk AI systems covered by Annex II.
Tim: What are the main components of the act?
PDF Summary: The main components of the Artificial Intelligence Act include:
- Subject Matter and Scope: The act includes a high-level statement emphasizing the protection of health, safety, and fundamental rights. It explicitly excludes national security from its scope.
- Definition of an AI System: The act aligns its definition of an AI system with international standards, particularly those of the OECD, and clarifies what is not covered under this definition.
- Prohibited AI Practices: It lists specific AI practices that are prohibited, including real-time biometric identification by law enforcement in public spaces (with certain exceptions), emotion recognition in workplaces and educational institutions, and predictive policing under certain conditions.
- High-Risk AI Systems: It introduces a classification for high-risk AI systems, including additional safeguards and transparency measures for law enforcement use of post-remote biometric identification.
- Exceptions for Law Enforcement Authorities: The act includes exceptions for law enforcement, like derogations from conformity assessment and authorization for real-world testing of high-risk AI systems.
- Fundamental Rights Impact Assessment: Some deployers are obliged to conduct a fundamental rights impact assessment, especially for high-risk AI systems in specific sectors.
- Testing High-Risk AI Systems: Provisions are included for testing high-risk AI systems in real-world conditions outside regulatory sandboxes.
- General Purpose AI Models: New rules introduce obligations for general purpose AI models, including documentation requirements, risk assessments, and compliance measures.
- Governance and Enforcement: The act establishes new governance structures like the AI Office for oversight and enforcement, particularly for general purpose AI models.
- Derogation from Conformity Assessment: The act includes provisions allowing derogation from conformity assessment under certain conditions.
- Existing AI Systems: It sets transitional periods for existing AI systems to comply with the new regulations.
- Implementing and Delegated Acts: The act includes provisions for updating and adjusting certain elements through delegated or implementing acts.
- Penalties: It outlines penalties for non-compliance with different aspects of the act.
- Entry into Application: It specifies the timelines for the regulation's different components to come into effect.
This structure reflects the act's comprehensive approach to regulating AI, balancing innovation with the need to protect public interests and fundamental rights.
Tim: Describe the types of penalties that can apply.
PDF Summary: The Artificial Intelligence Act stipulates various penalties for infringements of its provisions. The penalties are differentiated based on the nature of the infringement:
- For Non-Compliance with Prohibited AI Practices: The penalty for not complying with the prohibitions outlined in Article 5, specifically related to prohibited AI practices, is set at either 35 million EUR or 7% of the annual turnover, whichever is higher. This amount is slightly above the limit set in the Council’s revised mandate, which was 35 million EUR or 6.5% of annual turnover.
- For Providers of General Purpose AI Models: In cases of infringements by providers of general purpose AI models, the fines have been aligned with those for providers of high-risk AI systems. Notably, there is an additional grace period for providers of general purpose AI models, as no fines can be imposed during the first year after the rules come into application.
- For Other Infringements: The amounts of penalties for other infringements are slightly lower than the penalties for non-compliance with prohibited AI practices. However, the document does not specify the exact amounts for these other categories of infringements.
It is important to note that the penalty framework is designed to ensure compliance with the regulation and to deter entities from engaging in practices that could harm individuals or society through the misuse of AI technology.
Tim: What is the definition of a high-risk system? Are examples provided?
PDF Summary: The definition of a high-risk AI system in the Artificial Intelligence Act is detailed and involves several criteria. However, the document does not provide the complete definition in the sections available. Generally, high-risk AI systems are those that present significant risks to the health, safety, or fundamental rights of persons. These risks are assessed based on the intended purpose of the system and its use in specific contexts that are critical or sensitive due to their potential impact on individuals and society.
The document does mention that additional systems have been added to Annex III, which is the list of high-risk AI systems. This includes biometric categorisation systems and emotion recognition systems, as well as post-remote biometric identification systems, subject to certain scope limitations. However, for a comprehensive understanding of what constitutes a high-risk AI system, including specific examples, the complete list in Annex III and the full definition would need to be consulted. This detailed information was not included in the provided sections of the document.
To be continued