A six-month pause in the training of AI systems more powerful than GPT-4: this is what almost 2000 signatories of an open letter, published on Tuesday on the website of the "Future of Life Institute", a non-profit organization that advocates responsible and low-risk use of transformative technologies, are currently demanding. Advanced artificial intelligence could herald a fundamental change in the history of life on Earth, the letter says, and this must be planned for with due care.

Sibylle Anderl

Editor in the arts section, responsible for "Nature and Science".


Of particular concern are the risks of propaganda and misinformation, the loss of jobs and a general loss of control. Powerful AI systems should therefore be developed further only once it is clear that these risks can be controlled. Among the signatories are well-known names such as Apple co-founder Steve Wozniak and Elon Musk, although the latter has not exactly distinguished himself as an entrepreneur with particularly high moral standards.

"Concern that we can't keep up with regulation"

From Germany, the signatories include Ute Schmid, professor and head of the Cognitive Systems working group at the University of Bamberg, and Silja Vöneky, head of the FRIAS research group Responsible AI at the University of Freiburg. Speaking to the German Science Media Center (SMC), Schmid justified her involvement with the need to point out the risks of large language models and other current AI technologies. It is important, she said, to enter into a broad democratic discourse in which AI experts from research institutes and large tech companies actively participate.

As a professor of international law and legal ethics, Vöneky emphasized above all the lack of a suitable legal framework: "My concern is that we will not be able to keep up with regulation. The EU's AI regulation is not yet in force, and it classifies these systems as low risk anyway, so it hardly regulates them." The same applies to the Council of Europe's convention on AI and human rights; there is no other binding international treaty on AI.

The EU's AI regulation, which has been in the works for two years, is currently being negotiated and could be adopted this year at the earliest. At its core is a risk-based, three-tier regulatory approach that distinguishes between AI systems posing unacceptable risk, high-risk systems and low-risk ones. Chatbots like ChatGPT would fall into the last category. Even if the regulation did come into force, in two years at the earliest, nothing would change for the technologies criticized in the open letter. Vöneky criticizes this sluggishness: regulation has so far been conceived as "static" and "cannot react quickly enough to new risk situations created by new technical developments".

A temporary halt to research could, at least in theory, give politics and lawmaking a chance to catch up. "A moratorium would have the advantage that regulations could be adopted proactively before research progresses further," Thilo Hagendorff, a research group leader at the University of Stuttgart, told the SMC. At the same time, however, he takes a critical view of the letter: "The moratorium ultimately serves precisely those institutions whose activities it actually means to problematize. It suggests completely exaggerated capabilities of AI systems and stylizes them as more powerful tools than they actually are."

The moratorium thus fuels misunderstandings and false perceptions about AI, distracting from the real problems or even intensifying them. After all, exaggerated expectations and excessive trust in the powerful new language models are precisely what promote the lamented loss of control, along with the risk that users reveal intimate information or fail to sufficiently check the answers they are given.

In any case, it remains completely unclear how a research stop could be monitored and enforced at all. This is evident if only because the demand to halt systems more powerful than GPT-4 is not clearly defined: given the lack of transparency about the technical details and capabilities of OpenAI's language model, it would be difficult to decide which models are even affected. In addition, a development stop carries risks of its own. Thilo Hagendorff illustrates this with several scenarios: "If a query to a language model can provide better answers than human experts, then this makes knowledge work as a whole more productive. In extreme cases, it can even save lives. Language models in medicine, for example, are a great opportunity to save more lives or reduce suffering."

Italy, meanwhile, has already created facts on the ground. The Italian Data Protection Authority took alleged violations of data protection and youth protection rules as an occasion to demand that the company OpenAI block the application in Italy. Nello Cristianini, professor of artificial intelligence at the University of Bath, interpreted this to the British SMC as confirmation that the open letter had hit on a valid point: "It is not clear how these decisions will be enforced. But the mere fact that there seems to be a discrepancy between technological reality and the legal framework in Europe suggests that there may be some truth to the letter signed by various AI entrepreneurs and researchers two days ago."