It is a drastic warning from an extremely prominent place: a group of scientists and executives, including Sam Altman, the CEO of OpenAI, has warned in a statement that artificial intelligence (AI) poses a "risk of extinction" for humanity. Mitigating this risk, they say, should be a "global priority" alongside other societal-scale risks such as pandemics and nuclear war. It is the latest in a series of warnings about AI risks, many of which come from within the industry itself. Already in March, an open letter signed by representatives of research and industry called for a six-month moratorium on work on particularly advanced AI systems. Among its signatories was Elon Musk, the CEO of the electric car manufacturer Tesla, who once co-founded OpenAI. That letter struck a similar tone on possible AI risks, though the wording of the newly published statement is even more dramatic. The new warning also carries particular weight because Sam Altman is among its signatories. Since OpenAI caused a sensation with the launch of its AI language model ChatGPT last November, competition in the industry to develop such technologies has intensified, and the discussion about possible downsides has taken on additional urgency.

The new warning was issued by the Center for AI Safety, a nonprofit organization whose stated mission is to reduce AI risks to society. The statement carries around 350 signatures, including Sam Altman's and a number of other prominent names: among them Demis Hassabis, the CEO of the AI laboratory DeepMind, which belongs to the Internet company Google, and several top managers at the software company Microsoft, which maintains a close alliance with OpenAI. At the top of the list of signatories is Geoffrey Hinton, who is considered a pioneer of the field and is often called the "godfather of artificial intelligence". Hinton worked for Google until recently; since his resignation a few weeks ago, he has repeatedly and publicly warned of AI risks in gloomy terms, already drawing the comparison with nuclear weapons.

Among other things, the rapid advances in AI systems raise concerns that such tools could facilitate the spread of propaganda and make millions of jobs obsolete. Some warnings, such as the statement now published, go so far as to portray AI as an existential threat to humanity. Sam Altman recently said at a hearing before the US Congress: "If something goes wrong with this technology, then it can go quite wrong." While Altman has spoken out in favor of comprehensive regulation, he has rejected a moratorium of the kind Musk has called for. Musk no longer has any connection to OpenAI, but it recently emerged that he has founded a new AI company of his own.