The new law puts a significant emphasis on transparency. Companies must inform users when they are interacting with an AI system, whether on phone calls or in chats handled by chatbots.
As of today, the European Union's Artificial Intelligence (AI) Act is in force, the first law in the world to regulate systems that allow us to act more precisely and efficiently than humans in many areas and that promote innovation, but which at the same time pose serious risks that the new common framework seeks to avoid.
These are the five main points of the new law:
Goals
The main objectives are to establish a harmonised legal framework in the European Union for the development, commercialisation, implementation and use of Artificial Intelligence (AI) systems, an area that can generate many benefits but also involves risks. It is also intended to boost innovation and establish Europe as a leader in the sector.
To whom does it apply?
The rules apply to providers of AI systems that are put into service or placed on the market within the EU, or whose output is used in the EU, regardless of where the provider is based. They also apply to users of such systems, with users understood as those who operate them.
It does not apply to public authorities in third countries or international organisations when using AI systems in the field of police or judicial cooperation with the EU, nor to systems for military use or used in the context of national security, or those used for the sole purpose of scientific research and development.
Types of AI systems
Prohibited: Some AI systems or uses of AI are prohibited because they contradict EU values, including the right to non-discrimination, data protection and privacy. These include those that deploy subliminal techniques to distort a person’s behaviour in ways that may cause physical or psychological harm to them or others, biometric categorisation systems, indiscriminate capture of facial images from the internet, emotion recognition in the workplace and schools, systems for “scoring” people based on their behaviour or characteristics, predictive policing and AI that manipulates human behaviour or exploits people’s vulnerabilities.
However, the regulations allow exceptions. Real-time biometric identification systems may only be used if a number of safeguards are met, for example in the targeted search for a missing person or the prevention of a terrorist attack. Using these systems after the fact is considered a high-risk use and requires judicial authorisation linked to a criminal offence.
High-risk: High-risk AI systems may pose a potentially high risk to the rights and freedoms of natural persons and are therefore subject to strict obligations.
Systems with transparency requirements: Those responsible for these services must comply with requirements and provide information so as not to mislead consumers into believing that they are interacting with real people or with content created by them, for example, chatbot owners or creators of ‘deepfakes’.
General purpose AI systems: They have no initial intended purpose, but can be trained or modified to serve a purpose that could make them high-risk systems.
Fines
Fines will be adjusted according to the circumstances and will take into account the size of the provider. For those who fail to comply with the regulations, fines range from 7.5 million euros or 1.5% of the company's global turnover up to 35 million euros or 7% of global turnover, depending on the infringement.
Phases in the implementation of the new law
Following its entry into force on 1 August, it will be fully applicable twenty-four months later, with the exception of the prohibitions on practices, which will apply six months after the date of entry into force, that is, in February 2025.
In August 2025, the rules for general-purpose models, such as ChatGPT, will begin to apply, and a year later, in August 2026, the law will apply generally, except for some provisions.
Obligations for high-risk systems will begin to apply 36 months later, in August 2027.
Source: Eitb
