Well-known artificial intelligence researchers and prominent technology entrepreneurs such as Tesla CEO Elon Musk and Apple co-founder Steve Wozniak are calling, in a dramatic appeal, for an immediate pause in the development of huge AI systems. "AI systems with human-competitive intelligence can pose serious risks to society and humanity," says the open letter published on the website of the American think tank "Future of Life Institute" – and further: "Therefore, we call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. This pause should be public, verifiable and involve all key stakeholders." If such a pause cannot be implemented quickly, governments should step in and impose a moratorium.

Alexander Armbruster

Editor responsible for Wirtschaft Online.


In addition to Musk and Wozniak, the signatories of the appeal include Pinterest co-founder Evan Sharp and a large number of renowned computer scientists and AI specialists, among them Stuart Russell, Grady Booch, Turing Award winner Yoshua Bengio, MIT physicist Max Tegmark and New York-based scientist Gary Marcus, but also representatives of other disciplines such as the Israeli historian Yuval Noah Harari, who has become world-famous with several bestsellers on the social consequences of technological progress. In total, more than 1000 people have now signed the letter.

"Stop dangerous race"

The authors make it unmistakably clear what they believe is at stake. "Today's AI systems are becoming more and more competitive with humans at general tasks, and we have to ask ourselves: Should we allow machines to flood our information channels with propaganda and falsehoods? Should we automate all jobs, even fulfilling ones?" they ask, and then go further: "Should we develop non-human intelligences that could eventually outnumber us, outwit us, make us superfluous and replace us? Should we risk losing control of our civilization?"

The task of providing answers and making decisions should not be delegated to "unelected technology leaders." "Powerful AI systems should only be developed when we are sure that their effects are positive and their risks manageable. This trust must be well-founded and increase with the magnitude of the potential impact of a system."

Criticism of "black box models"

In the call, the authors repeatedly refer explicitly to the American AI company OpenAI, which has just publicly announced the latest version of its general-purpose AI system under the name GPT-4; unlike its predecessor, it can handle not only speech and text but also images. The Californian start-up became known around the globe when it made a user interface based on its AI accessible to everyone last autumn under the name ChatGPT. Many millions of people around the world tried out the AI system and experienced for the first time how advanced this technology has become at answering questions or writing longer passages of text. The program stunned many professionals as well.

For years, however, OpenAI has not been alone in investing enormous resources, with the help of the Internet company Microsoft, in developing such huge language AI systems: all the leading tech companies, such as Alphabet (Google) and Meta (Facebook), as well as Chinese universities, are working on them. In Germany, the Heidelberg-based start-up Aleph Alpha is developing such systems. Especially since ChatGPT became widely available, discussion has noticeably intensified about the potential of this approach and about how powerful AI systems could become if they were simply made larger and larger with these methods, equipped with even more computing power and trained on even more data.

At the same time, the authors of the call make it clear that they do not want to stop the further development of AI systems altogether. They demand "only that the dangerous race to ever larger, unpredictable black-box models (...) is stopped." By "black-box models" they mean that it is often not clear why a huge language AI arrived at exactly the statement it made.

"AI research and development should focus on making today's powerful, state-of-the-art systems more accurate, secure, interpretable, transparent, robust, aligned, trustworthy and loyal," the authors of the call continue. "In parallel, AI developers need to work with policymakers to dramatically accelerate the development of robust AI governance systems." In their view, these should include new and competent regulators dealing specifically with AI, the monitoring and tracking of high-performance AI systems, and a robust testing and certification system.