The AI Act was finally passed by the EU Parliament three weeks ago – what happens now?
On Wednesday, 13 March 2024, the EU AI Act was finally and officially passed. This is a critically important achievement for the future of technology and copyright, and a much-needed piece of legislation given the numerous and creative ways AI technology is being used across all sectors. The AI Act (partially) answers the legal and practical questions the industry has been pondering. The clarity it provides on the extent of the restrictions and the relevant sanctions will hopefully unlock the development and financing of AI technology.
The AI Act regulates the development and deployment of AI technologies and provides clear guidelines and obligations for specific uses of AI. The aim is to strike a balance between proper protection against the risks of AI use and ensuring the continued development of AI-based technologies in a safe and cautious manner.
The EU actively encourages AI development
The European AI Office, situated within the European Commission, is designed to become the hub of AI expertise throughout the EU. Its primary functions will include directing the implementation of the AI Act, particularly concerning general-purpose AI, nurturing the growth and adoption of reliable AI, and fostering international collaboration.
The EU is actively promoting the development of AI-based technology through various measures:
- Providing financial support via programs like Horizon Europe and Digital Europe, with a projected investment of around €4 billion until 2027, specifically dedicated to generative AI.
- Implementing initiatives to bolster the EU’s generative AI talent pool through education, training, and reskilling activities.
- Encouraging both public and private investments in AI start-ups and scale-ups, including through venture capital and equity support, facilitated by programs like the EIC Accelerator Programme and InvestEU.
- Accelerating the establishment and utilization of Common European Data Spaces, vital for training and enhancing AI models, with a new Staff Working Document published to provide updated information on this endeavor.
- Launching the ‘GenAI4EU’ initiative, designed to foster the development of innovative use cases and emerging applications across various industrial sectors and the public sector, including areas such as robotics, health, manufacturing, mobility, climate, and virtual environments.
The AI Act in a nutshell
The main purposes of the AI Act are to:
- address risks specifically created by AI applications;
- prohibit AI practices that pose unacceptable risks;
- determine and clarify a list of high-risk applications of AI technologies;
- set clear requirements that are to be met by AI systems used for high-risk applications;
- define specific obligations imposed on developers and providers of high-risk AI applications;
- require a conformity assessment before an AI system becomes operational or is made available to the public;
- establish enforcement measures for AI systems already placed on the market;
- establish a governance structure at a European and national level.
Firstly, an AI system is legally defined as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
The most important step in this groundbreaking regulation was to establish the criteria for assessing the risks associated with the development and deployment of AI technologies.
The AI Act evaluates the implications of AI technology through a risk-based approach. As a result, it defines four risk levels determined by the potential threat associated with the use of AI technologies (a simple illustrative sketch follows the list below):
- Unacceptable risk;
- High risk;
- Limited risk;
- Minimal risk.
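Purely as an illustrative aid (the tier names come from the Act itself, but the one-line consequences are our own simplification, not official guidance), the four tiers and their headline legal effects could be sketched in code as follows:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act (simplified)."""
    UNACCEPTABLE = "prohibited outright, subject to narrow exceptions"
    HIGH = "permitted only after strict requirements and conformity assessment"
    LIMITED = "permitted subject to transparency obligations"
    MINIMAL = "permitted without significant conditions"

for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```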
As we have seen lately, the use and development of AI technologies has gained momentum throughout the world, impacting most industries, so an EU regulation ensuring a harmonized legal framework was very much needed.
The Act specifically aims to regulate unacceptable-risk and high-risk AI systems while also offering a clear classification that can be used to assess this type of technology. Perhaps the AI Act will bring some stability and clarity to the AI development community and further motivate investments in this field, as technological progress is still encouraged at EU level. General-purpose AI now has a clear legal framework under which it can operate, and compliance measures can be adopted early. At the other end of the AI spectrum, we encounter the unacceptable-risk and high-risk AI systems.
AI systems classified as an unacceptable risk are prohibited.
This category includes systems that pose a critical threat to people, their rights and society at large.
Thus, AI systems that use subliminal, manipulative or deceptive techniques to distort behavior and impair decision-making, as well as systems capable of exploiting vulnerabilities in order to distort normal behavior, are deemed to pose a significant risk of harm. This category comprises various types of systems and situations that can directly and even permanently affect human life.
The prospective ramifications associated with AI systems deemed to pose unacceptable risks evoke comparisons to Isaac Asimov’s “I, Robot” and his exploration of the intricacies of human motives and emotions. Perhaps, prior to the AI Act, both development of and investment in AI tech had been hindered by this type of contemplation of the potential peril posed by AI analyzing human behavior and subsequently manipulating, deceiving, or causing harm based on gathered data. AI is no longer a sci-fi topic; it is an inevitable part of our everyday reality, now regulated under the AI Act, which implements effective safeguards to protect the rights and freedoms of individuals.
The constraints outlined in the AI Act, coupled with worldwide initiatives, establish a framework aimed at preventing misuse while fostering technological advancement. On 21 March, the United Nations General Assembly unanimously adopted the first global resolution on artificial intelligence. This resolution urges nations to safeguard human rights, preserve personal data integrity, and institute monitoring mechanisms to identify AI-related risks. Proposed by the United States and supported by China and over 120 other countries, this non-binding resolution also advocates for enhanced privacy protocols.
The AI Act regulates very specific exceptions to the ban on unacceptable-risk AI.
The best example is ‘real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes. Under very specific and strict conditions, such systems may be used, in particular where not using the tool would cause more harm than the risk it poses, and provided the rights and freedoms of the persons concerned remain protected. The three situations that allow this scenario are:
- searching for missing persons, abduction victims, and people who have been subject to human trafficking or sexual exploitation;
- preventing substantial and imminent threat to life, or foreseeable terrorist attack; or
- identifying suspects in serious crimes (e.g., murder, rape, armed robbery, narcotics and illegal weapons trafficking, organized crime, and environmental crime).
Although we understand the rationale behind these specific provisions, some may say that the mere existence and development of these systems still poses too high a risk.
High-risk AI systems may only be used after meeting very specific and strict requirements.
This category comprises systems and applications that must implement several safety measures, undergo conformity assessments and obtain various authorizations, evaluated on a case-by-case basis for each particular system, before deployment.
As a general rule, an AI system will be construed as high-risk if it profiles individuals, i.e. performs automated processing of personal data to assess various aspects of a person’s life, such as work performance, economic situation, health, preferences, interests, reliability, behavior, location or movement. The key difference between high risk and unacceptable risk lies in the system’s capability to take advantage of and freely exploit this type of data.
High-risk AI systems are those used as a safety component of, or which are themselves, products covered by EU laws, as well as those falling under the use cases listed in Annex III to the AI Act.
For high-risk AI systems, the regulation provides clear instructions and requirements to be complied with by the respective deployers; for example, such systems must undergo a third-party conformity assessment as described in the AI Act.
These requirements generally consist of several risk and quality management systems and specific compliance measures, such as:
- conducting data governance;
- providing technical documentation to demonstrate compliance and to provide authorities with the necessary data in order to assess said compliance;
- implementing several mandatory design features, for instance record-keeping capabilities and the ability for deployers to exercise human oversight, while achieving appropriate levels of accuracy, robustness and cybersecurity.
The different types of high-risk AI systems, along with their possible uses, are clearly and specifically described in Annex III to the AI Act. The annex also provides the categories of systems that, although considered high-risk, can be deployed if specific requirements are met. Such is the case for certain non-banned biometric systems, such as remote biometric identification systems, excluding (i) biometric verification systems that merely confirm a person is who they claim to be, (ii) biometric categorization systems inferring sensitive or protected attributes or characteristics and (iii) emotion recognition systems.
Limited Risk AI
Regarding the limited-risk category, the Act imposes specific transparency requirements to ensure adequate awareness when AI systems are used. Deployers must also ensure that AI-generated content, such as text, images, or audio-video content, is clearly identifiable as such (the use of AI must be transparently disclosed). Moreover, various other criteria must be satisfied for AI-generated content intended for widespread public use.
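As a minimal, purely hypothetical sketch of what such a disclosure could look like in practice (the envelope format and field names below are our own invention; the Act requires identifiability but prescribes no particular schema):

```python
import json

def label_ai_content(content: str, model_name: str) -> str:
    """Wrap generated output in a machine-readable AI disclosure envelope.

    Hypothetical schema: the AI Act requires that AI-generated content be
    identifiable as such, but does not mandate these field names.
    """
    return json.dumps({
        "content": content,
        "ai_generated": True,  # transparency flag for downstream consumers
        "generating_model": model_name,
        "disclosure": "This content was generated by an AI system.",
    })

print(label_ai_content("A short AI-written summary.", "example-model"))
```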
Minimal Risk AI
As for the final category, which is the most commonly encountered, minimal-risk AI systems are permitted without significant conditions, although various codes of conduct concerning low-risk AI will probably be introduced in the future. These systems encompass applications such as AI-powered entertainment apps, video games, and spam filters. Currently, the majority of AI systems utilized in the EU fall into this category.
General Purpose AI (GPAI)
GPAI models represent a slightly different category from the four main ones described above, as they are not strictly standalone AI systems but rather serve as models or foundational elements. The AI Act describes a GPAI model as an AI model that, among other things, can perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.
The obligations imposed on GPAI model providers are considerably less strict than those applicable to providers of high-risk AI systems. Most of the requirements are transparency-related, more similar to those of the limited-risk category. Generally, providers must keep technical documentation up to date and make it available to the EU AI Office as well as national regulators.
Detailed technical documentation, training information and instructions must also be made available to downstream providers that wish to integrate the model into their own AI systems. The AI Office will release a summary template setting out the essential information to be included in this documentation, such as the main data sets used in training the model and explanations of the rationale behind the training steps.
The main general requirements to be met by GPAI model providers are to:
- present technical documentation, including details about the training and testing process as well as the evaluation results;
- provide information and documentation to downstream providers that intend to integrate the respective GPAI model into other AI systems, so that the latter can understand the model’s capabilities and limitations and comply with the AI Act;
- establish separate policies to comply with the Copyright Directive;
- publish detailed descriptions of the content and data used for training the GPAI model.
As for the enforcement timeline, providers will need to ensure compliance with the AI Act rather quickly, as its obligations will start to apply:
- 6 months after entry into force for prohibited AI Systems;
- 12 months after entry into force for GPAI;
- 24 months after entry into force for High-Risk AI systems specifically listed under Annex III;
- 36 months after entry into force for high-risk AI systems not listed under Annex III but intended to be used as a safety component of a product, or where the AI is itself a product that must undergo conformity assessments under existing specific EU laws (e.g., toys, radio equipment).
As for the general timeline, the AI Act will enter into force 20 days after publication in the Official Journal of the EU, which is expected in May or June 2024. The Act will then become applicable 24 months after the entry-into-force date, except for the specific provisions listed above.
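Since the actual deadlines depend on the still-unknown publication date, a short sketch can make the staggered scheme concrete. The entry-into-force date below is a hypothetical placeholder, not the real date:

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of calendar months."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])  # clamp to month end
    return date(year, month, day)

# Hypothetical placeholder; the real date follows publication in the Official Journal.
entry_into_force = date(2024, 6, 1)

deadlines = {
    "Prohibited AI systems": add_months(entry_into_force, 6),
    "GPAI obligations": add_months(entry_into_force, 12),
    "High-risk AI systems (Annex III)": add_months(entry_into_force, 24),
    "High-risk AI systems (safety components of regulated products)": add_months(entry_into_force, 36),
}

for scope, deadline in deadlines.items():
    print(f"{scope}: obligations apply from {deadline.isoformat()}")
```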
As far as sanctions go, the AI Act sets out a strict regime for noncompliance.
- For the breach of the AI Act’s prohibitions, the fines amount to €35 million or 7% of total worldwide annual turnover (whichever is higher);
- For noncompliance with the obligations set out for providers of high-risk AI systems or GPAI models, the fines amount to €15 million or 3% of total worldwide annual turnover (whichever is higher);
- For the supply of incorrect or misleading information to the notified bodies or national competent authorities in reply to an official request, the fines amount to €7.5 million or 1.5% of total worldwide annual turnover (whichever is higher).
However, it is worth mentioning that, with respect to small and medium-sized enterprises, the lower of the two amounts will apply. This shows that a balance is being maintained between carefully regulating AI technology and preserving the ability to further develop it. If the legal provisions are complied with, the future development of this technology will not be impeded.
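The arithmetic behind these caps is simple enough to show directly. A minimal sketch, assuming a straightforward reading of the ‘whichever is higher’ rule and its inversion for SMEs:

```python
def max_fine_eur(fixed_cap: float, turnover_pct: float,
                 annual_turnover: float, is_sme: bool = False) -> float:
    """Indicative maximum AI Act fine in euros.

    Most undertakings face the higher of the fixed cap and the
    turnover-based cap; for SMEs the lower of the two applies.
    """
    turnover_based = annual_turnover * turnover_pct
    return min(fixed_cap, turnover_based) if is_sme else max(fixed_cap, turnover_based)

# Breach of a prohibition by a company with €600m annual turnover:
print(max_fine_eur(35_000_000, 0.07, 600_000_000))              # 42000000.0 (7% exceeds €35m)
# The same breach by an SME with €10m annual turnover:
print(max_fine_eur(35_000_000, 0.07, 10_000_000, is_sme=True))  # 700000.0 (lower amount applies)
```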
At a national level, on 19 March, the legislative proposal on AI was registered for debate in the Romanian Senate. Although the Romanian legislator is keen to provide additional clarifications and introduce specific notions adapted to the Romanian tech market, the draft law is still in its very early stages, and we expect it will undergo multiple amendments before being enacted.
Furthermore, the Ministry of Research, Innovation and Digitization (MCID) has launched a public debate on the draft Government Decision approving the National Strategy for Artificial Intelligence.
As the adoption of the EU AI Act is a major milestone in establishing a wide-ranging legal framework to oversee the development and use of AI systems in this stage of the Digital Era, secondary legislation and guidelines will certainly be introduced over time. Guiding materials have already started to emerge, such as the European Commission’s Ethics Guidelines for Trustworthy AI.
For now, the EU has taken a critically important first step towards regulating this rapidly developing wave of technology.
For any other information on this, please feel free to contact Elena Stan (Managing Associate) and Stefan Necula (Associate).