
The world’s first Artificial Intelligence Law comes into force in the EU: key points and objectives

The new law puts a significant emphasis on transparency. Companies must inform users when they are interacting with an AI system, whether on phone calls or in chats where chatbots intervene.

As of today, the Artificial Intelligence (AI) Act is in force in the European Union. It is the first law in the world to regulate systems that can act more precisely and efficiently than humans in many areas and that promote innovation, but which at the same time raise significant risks that the new common framework seeks to avoid.

These are the five main points of the new law:

Goals

The main objectives are to establish a harmonised legal framework in the European Union for the development, commercialisation, implementation and use of Artificial Intelligence (AI) systems, an area that can generate many benefits but also involves risks. It is also intended to boost innovation and establish Europe as a leader in the sector.

To whom does it apply?

The rules apply to providers of AI systems that are put into service or placed on the market within the EU, or whose output is used in the EU, regardless of the provider's origin. They also apply to users of such systems, meaning those who operate them.

It does not apply to public authorities in third countries or to international organisations when they use AI systems in the field of police or judicial cooperation with the EU, nor to systems for military use or used in the context of national security, nor to those used solely for scientific research and development.


Types of AI systems

Prohibited: Some AI systems or uses of AI are banned because they contradict EU values, including the right to non-discrimination, data protection and privacy. These include systems that deploy subliminal techniques to distort a person's behaviour in ways that may cause physical or psychological harm to them or others; biometric categorisation systems; indiscriminate scraping of facial images from the internet; emotion recognition in the workplace and in schools; "social scoring" of people based on their behaviour or characteristics; predictive policing; and AI that manipulates human behaviour or exploits people's vulnerabilities.

However, the regulation allows exceptions. Real-time biometric identification systems may only be used when a number of safeguards are met, for example in the targeted search for a missing person or the prevention of a terrorist attack. Using these systems after the fact is considered a high-risk use and requires judicial authorisation linked to a criminal offence.

High-risk: High-risk AI systems may pose a potentially high risk to the rights and freedoms of natural persons and are therefore subject to strict obligations.

Systems with transparency requirements: Those responsible for these services, for example chatbot operators or creators of "deepfakes", must disclose information so that consumers are not misled into believing that they are interacting with real people or with content created by them.

General-purpose AI systems: These have no single intended purpose, but can be trained or adapted to serve purposes that could make them high-risk systems.

Fines

Fines will be adjusted to the circumstances and will take into account the size of the provider. For those who fail to comply with the regulation, fines range from 7.5 million euros or 1.5% of the company's global turnover up to 35 million euros or 7% of global turnover, depending on the infringement.

Phases in the implementation of the new law

Following its entry into force on 1 August, the law will be fully applicable twenty-four months later, with the exception of the prohibitions on certain practices, which will apply six months after entry into force, that is, in February 2025.

In August 2025, the rules for general-purpose models, such as the one behind ChatGPT, will begin to apply, and a year later, in August 2026, the law will apply generally, with some exceptions.

Obligations for certain high-risk systems will begin to apply 36 months after entry into force, in August 2027.

Source: EITB
