The European Union is the first in the world to adopt comprehensive rules on artificial intelligence.

In the absence of US legislation, the new regulation will set the standard for how AI is governed in the Western world.

At the heart of the new law is the need to address concerns about bias, privacy and other risks arising from the rapid evolution of technology.

The legislation will ban the use of AI to detect emotions in workplaces and schools, limit its use in high-risk situations, and introduce possible (though not yet certain) restrictions on generative AI tools such as ChatGPT, which the Italian data protection authority (the Garante) had scrutinized a year earlier over how it collected and handled users' sensitive information.

We discuss the new legislation with Antonino Caffo, a journalist specializing in technology who writes, among other outlets, for the Ansa news agency.

Caffo, the European Parliament has approved a regulation on artificial intelligence, the AI Act. Some have described this landmark regulation as a "victory of democracy over the lobbies".

Is that so?

Last June, when discussion of the AI Act began, more than 150 CEOs of major European companies such as Renault, Heineken, Airbus and Siemens sent an open letter to the European Parliament, the European Commission and the governments of the EU Member States expressing their opposition to the draft law on artificial intelligence.

The reason?

The fear that, in that form, the regulation could negatively impact competition and European technological sovereignty.

In the updated text, many of the AI Act's requirements, for example on data governance, have been revised, and not always in a more restrictive direction; quite the opposite.

The initial draft proposed by Parliament sought to impose a complete ban on mass biometric surveillance technologies, whereas the final law will allow law enforcement to use facial recognition software in public spaces.

This opens the door to a whole new range of services and offerings from specialized companies, but also to possible errors and so-called "bias": results that are not outright wrong but are questionable.

Are we looking at a truly comprehensive regulation (too comprehensive, for some observers)?

Like many EU regulations, the AI Act will initially serve as consumer safety legislation, taking a "risk-based approach" to products or services using artificial intelligence.

The riskier an AI application is, the more scrutiny it will face.

Low-risk systems, such as content recommendation systems or spam filters, will face only light rules, such as disclosing that they are powered by artificial intelligence.

The EU expects most AI systems to fall into this category.

High-risk uses of AI, for example in medical devices or critical infrastructure like water and electricity networks, face more stringent requirements, such as using high-quality data and providing clear information to users.

Some uses of AI are banned because they are considered an unacceptable risk, such as social scoring systems that regulate people's behavior, some types of predictive policing, and emotion recognition systems in schools and the workplace.

Other banned uses include the scanning of faces by police in public using AI-powered remote “biometric identification” systems, except for serious crimes such as kidnapping or terrorism.

Clearly, no regulation can be complete when dealing with a broad-spectrum technology that is theoretically applicable to any context of today's digital life.

The law will have to keep pace with innovation and be updated as the context evolves.

What is the originality of this legislation?

Early drafts of the law focused on AI systems that performed narrowly defined tasks, such as scanning resumes and job applications.

The astonishing rise of general-purpose AI models, exemplified by OpenAI's ChatGPT, sent policymakers scrambling to keep up.

They added provisions for so-called generative AI models, the technology behind chatbot systems that can produce unique and seemingly lifelike responses, images and more.

Developers of general-purpose AI models, from European startups to OpenAI and Google, will have to provide a detailed summary of the text, images, videos and other internet data used to train their systems, and comply with EU copyright law.

AI-generated deepfake images, videos or audio of existing people, places or events must be labeled as artificially manipulated.

Extra scrutiny applies to the largest, most powerful AI models that pose "systemic risks", including OpenAI's GPT-4, its most advanced system, and Google's Gemini.

The EU says it is concerned that these powerful AI systems could “cause serious incidents or be misused for large-scale cyber attacks.”

There is also concern that generative AI could spread “harmful biases” across many applications.

Companies providing these systems will need to assess and mitigate risks; report any serious incidents, such as malfunctions that cause someone's death or serious harm to health or property; put cybersecurity measures in place; and disclose how much energy their models use.

Who does the AI Act apply to?

And what about non-European companies?

The provisions will begin to take effect gradually, with EU countries required to ban prohibited AI systems six months after the rules enter into force.

The rules for general-purpose AI systems such as chatbots will begin to apply one year after the law enters into force.

The full set of regulations, including requirements for high-risk systems, will come into force by mid-2026.

As for enforcement, each EU country will set up its own AI watchdog body, where citizens will be able to lodge a complaint if they believe they have been the victim of a breach.

Meanwhile, Brussels will create an AI office responsible for enforcing and overseeing the law for general-purpose AI systems.

Violations of the AI Act could result in fines of up to 35 million euros ($38 million), or 7% of a company's global revenue.

In the United States, President Joe Biden signed a broad executive order on artificial intelligence last October, which is expected to be backed up by legislation and global agreements.

Lawmakers in at least seven US states are working on their own AI legislation.

Chinese President Xi Jinping has proposed his Global AI Governance Initiative, and the authorities have issued "provisional measures" for managing generative artificial intelligence, which apply to text, images, audio, video and other content generated for people inside the country.

Others, from Brazil to Japan, as well as global groups such as the United Nations and the G7, are moving to develop specific standards.

Is this a regulation that will encourage better design and "training" of AI?

The introduction of obligations for general-purpose artificial intelligence (GPAI) models has caused confusion among businesses.

Distinguishing between GPAI models and GPAI systems and understanding their separation from high-risk systems poses challenges.

Many companies are grappling with these distinctions, which adds to the uncertainty about applicable limits and compliance obligations.

Furthermore, the obligations placed on GPAI models themselves remain unclear.

For example, the requirement to make available a “sufficiently detailed summary” of training data is open to interpretation, at least until harmonized standards are published.

In conclusion, the AI Act, while marking a significant step in the regulation of AI, is not without its challenges.

The industry must address the complexities and ambiguities in the legislation, seeking clarity through ongoing dialogue and engagement.

The diversity of perspectives highlights the need for continuous refinement and adaptation to ensure a balanced and effective regulatory framework.