Novelties in EU Legislation: Artificial Intelligence Act

29 March 2024

During its plenary session on 13 March 2024, the European Parliament approved the Artificial Intelligence Act, with which the European Union aims to become the first to legislatively regulate this rapidly evolving field, one that can effectively contribute to addressing numerous societal challenges while also bringing previously unknown risks. The main objectives of the act are thus to ensure the safety of citizens and respect for fundamental rights, democracy, and ethical principles, while at the same time promoting innovation and research in the field of artificial intelligence.

Existing legislation, both at the European Union level and at the national level of individual member states, does provide certain protection. It is, however, insufficient given the rapid development of artificial intelligence systems and the specific challenges they bring. The European Union therefore began its efforts to establish a unified regulatory framework in 2020, with the aim of positioning Europe as a leader in technology and artificial intelligence. At that time, the White Paper on Artificial Intelligence was published, a non-legally binding document outlining the anticipated adoption of European legislation on the development and use of artificial intelligence systems. In April 2021, the European Commission submitted a legislative proposal for the adoption of the Artificial Intelligence Act. By December 2023, the proposal had been agreed in negotiations with member states, and in March 2024, as mentioned, it was approved by the European Parliament. The act now only needs to be formally endorsed by the EU Council.

The new act specifies that artificial intelligence systems must fulfill certain obligations, depending on the level of risk they pose and the extent of their impact (the so-called risk-based approach). Artificial intelligence systems will thus be categorized into those representing (1) unacceptable risk, (2) high risk, (3) limited risk, and (4) minimal risk. Different levels of risk will entail varying degrees of regulation.

Unacceptable risk
Artificial intelligence systems falling within this category will be prohibited in the European Union. These are applications that are contrary to EU values and pose an obvious threat to the security and rights of citizens. Examples include biometric identification and categorization systems (including facial recognition), systems enabling so-called social scoring, systems used to manipulate human behavior, and systems exploiting vulnerable groups of people (such as people with disabilities or children).

High risk
Artificial intelligence systems identified as high risk will be allowed on the EU market; however, they will have to meet certain strict requirements, and their compliance will have to be assessed before they are placed on the market.

High risk systems include artificial intelligence technology used in critical infrastructure that could endanger the lives and health of EU citizens (e.g. transport), in education or vocational training (e.g. scoring of exams), in the safety components of products (e.g. robot-assisted surgery), in employment (e.g. CV-sorting software), in essential private and public services (e.g. credit scoring), in law enforcement in ways that may infringe on fundamental rights (e.g. evaluating the reliability of evidence), in migration, asylum, and border control (e.g. processing visa applications), and in the administration of justice and democratic processes (e.g. systems for searching court judgments).

These systems will need built-in mechanisms for adequate risk assessment and mitigation, trained on datasets comprehensive enough to minimize the risk of discriminatory outcomes. Activity logging will be required to ensure the traceability of results, and detailed documentation containing all necessary information about the system and its purpose will need to be maintained so that authorities can assess its compliance. Lastly, the systems will have to provide clear information to users, appropriate human oversight, and a high level of security and accuracy.

Limited risk
In cases of limited risk, potential risks arise from a lack of transparency in the use of artificial intelligence. These are systems that communicate directly with users (e.g. chatbots), for which the new legislation foresees specific transparency obligations to ensure citizens are informed. In practice, this means that individuals using an artificial intelligence system must be notified that they are interacting with a machine. Additionally, content generated by artificial intelligence must be clearly labeled as artificially generated. This applies particularly to audio and video content constituting so-called deepfakes. Based on this information, individuals can then make an informed decision about whether to continue using the system.

Minimal risk
The new Artificial Intelligence Act envisages the free use of artificial intelligence systems with minimal risk, including video games based on artificial intelligence and spam filters. Currently, the vast majority of systems in use in the European Union fall into the category of minimal risk.

Following formal approval by the EU Council, the act will enter into force 20 days after its publication in the Official Journal of the European Union, and it will apply gradually. It will be fully applicable 24 months after its entry into force, with the following exceptions: prohibitions on certain artificial intelligence practices will take effect after 6 months, codes of practice after 9 months, rules on general-purpose artificial intelligence, including governance, after 12 months, and obligations related to high risk systems 36 months after the date of entry into force.

Artificial Intelligence Liability Directive and proposal for the revision of the Directive on Liability for Defective Products

In connection with the Artificial Intelligence Act, it is necessary to highlight the current lack of regulation of liability for artificial intelligence at the European Union level, which constitutes a major obstacle for companies wishing to use it. The European Commission therefore submitted a legislative proposal for an Artificial Intelligence Liability Directive in 2022, aiming to ensure the effective enforcement of liability for damage caused by artificial intelligence. Additionally, a legislative proposal for the revision of the Directive on Liability for Defective Products was submitted, covering no-fault product liability. Through this revision, the legislator aims to update the directive adopted in the 1980s and expand its scope to software, which also includes artificial intelligence systems.

The proposal for the first directive introduces two new legal mechanisms: the disclosure of evidence (a court may order the disclosure of evidence concerning a specific high risk system suspected of having caused damage, provided the claimant demonstrates the plausibility of their claim) and a rebuttable presumption of causation (a presumption is established in favor of the claimant linking the fault of the defendant to the output produced by the artificial intelligence system). The revised Product Liability Directive, in turn, changes the definition of defect: a product is now considered defective when it does not provide the safety that the public at large is entitled to expect. Together, these changes represent a significant step towards the legal regulation of liability for artificial intelligence systems.

Since artificial intelligence is a rapidly evolving technology, the legislator has anticipated an approach capable of keeping pace with its development while also allowing for future adaptation of rules to technological changes. This will continuously ensure safe and transparent use of artificial intelligence systems, which promise significant economic and broader societal benefits.