To ensure human-centric and ethical development of artificial intelligence in Europe, the European Parliament has approved new transparency and risk management rules for AI systems at committee level. Today, the Committee on the Internal Market and the Committee on Civil Liberties adopted a draft negotiating mandate on the first-ever standards for artificial intelligence with 84 votes in favour, 7 against and 12 abstentions.

In their amendments to the Commission's proposal, MEPs aim to ensure that AI systems are supervised by people, and are safe, transparent, traceable, non-discriminatory and environmentally friendly. They also want a uniform, technology-neutral definition of AI, so that it can be applied to the AI systems of today and tomorrow. Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate must be approved by the whole Parliament, with the vote scheduled for the 12-15 June session.

The rules follow a risk-based approach and set obligations for providers and users depending on the level of risk the AI can generate. AI systems posing an unacceptable level of risk to people's safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people's vulnerabilities, or are used for social scoring (classifying people based on their social behaviour, socio-economic status or personal characteristics).

MEPs expanded the classification of high-risk areas to include harm to people's health, safety, fundamental rights or the environment. They also added to the high-risk list AI systems used to influence voters in political campaigns, as well as recommender systems used by social media platforms designated as very large online platforms under the Digital Services Act (those with more than 45 million users). MEPs included obligations for providers of foundation models – a new and rapidly evolving development in the field of AI – to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. Providers of such models would have to assess and mitigate risks, comply with design, information and environmental requirements, and register in the EU database.

Generative foundation models, such as GPT, would have to comply with additional transparency requirements, such as disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training. To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licences. The new law also promotes regulatory sandboxes – controlled environments established by public authorities to test AI before its deployment.

MEPs want to strengthen citizens' right to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that significantly affect their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI regulation is implemented.