Billionaire Elon Musk, Apple co-founder Steve Wozniak and former US presidential candidate Andrew Yang have joined hundreds of other prominent figures and executives in formally asking AI developers to pause for at least six months. The reason? Innovation in this field is moving too fast, and the risk is that we reach a point where AI takes over our lives, not by overwhelming us but simply by radically changing life as we know it today. But is there really such a risk? We asked Antonino Caffo, journalist and technology expert.

What happened with Musk and associates' letter?
Well, it is true that contemporary artificial intelligence systems are becoming competitive with humans at general tasks. That is the premise of the document that more than a thousand prominent figures, including Elon Musk, have signed and published on the website of the Future of Life Institute, a non-profit organization. "We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. The pause should be public, verifiable and include all key actors. If it cannot be enacted quickly, governments should step in and institute a moratorium," the letter reads. The organization is no stranger to such initiatives: in 2015 it had already published a letter supporting the development of artificial intelligence for the benefit of society, while warning against the risks and potential dangers of adopting it unchecked and without clear rules. The latest open letter, signed by 1,124 people, points the finger at OpenAI's GPT-4 language model. For Musk and company, the update to the popular chatbot, which firms like Microsoft have also integrated into an experimental version of Bing's web search, is a warning sign. The current version is more accurate and can converse in text much like a human, to the point of deceiving an interlocutor not used to telling so-called bots apart from human 'agents'. Pressing ahead with development without looking back could become a problem.

But is the risk really that high?
Personally, I've been using the freely accessible web tool Bing Image Creator for a few days. It is based on DALL-E, another artificial intelligence model that turns text into images. In a few seconds anyone can create a digital work that, as things stand, is free of copyright. Two problems are already evident here. The first is social: how long before many creatives, especially those doing more basic work, are replaced by software? Then the second point: what happens to copyright? If I ask Bing's Image Creator for a landscape of Rome by the sea, I get a fabricated scene, one that does not exist, yet built from elements taken from reality: a glimpse of Rome, perhaps the Colosseum, with a stretch of beach and sea, perhaps Rimini. Where did those individual elements come from? Who created them, and how will the original authors be paid? They will not be, and this is a serious regulatory gap.
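For readers curious about what "turning text into images" looks like in practice, here is a minimal sketch using OpenAI's Python SDK to call the DALL-E image API directly. This is an illustrative assumption rather than a description of Bing Image Creator itself (which has no public API): the model name, image size and the OPENAI_API_KEY environment variable follow the standard SDK conventions, not anything stated in this article.

```python
# Minimal, illustrative sketch: generate an image from a text prompt with
# OpenAI's DALL-E API (the model family the article says powers Bing Image Creator).
# Assumes the `openai` Python package (v1.x) and OPENAI_API_KEY set in the
# environment; the model name and size are illustrative choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",  # illustrative model choice
    prompt="A landscape of Rome by the sea, with the Colosseum in view",
    n=1,
    size="1024x1024",
)

# The API returns a URL to the generated image (or base64 data if requested).
print(response.data[0].url)
```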

A few days ago, at the SXSW 2023 festival in Texas, the co-founder of OpenAI, the organization behind the ChatGPT chatbot in which Microsoft has invested a great deal of money, said that artificial intelligence will not steal our jobs but will simply change how some professions work, freeing them from repetitive tasks and leaving more room for other things. That may be true, but in the transition period many people will be replaced by software; on that there is little doubt.

And all this within an international framework that still lacks defined standards and instruments?
Exactly. For this reason, beyond the general fears, it is true that at some point it will be important to obtain independent review from specialized bodies before training future systems, and to agree on limits to the growth rate of the computing power used to create new models. Suffice it to say that within the year ChatGPT is expected to reach generation 5, the one considered "AGI", or "Artificial General Intelligence", indistinguishable from a human when it comes to chatting or writing text. If we combine these developments with the ever-growing threat from cybercriminals, it is clear we face a scenario in which distinguishing truth from fiction in digital content will become very difficult. Is that photo on social media real or not? Is the video posted by a YouTube channel genuine or fabricated? Does the email asking me to click on a link and open a file really come from my colleague? As always, the problem is not technological innovation itself but the use we make of it.

What is the solution proposed by the Future of Life Institute?
Simple: take a step back. It is not just a matter of pausing the various projects but of stopping, reflecting on what has been achieved, and understanding whether reaching future goals requires choices from which there is no turning back. For example, making ChatGPT available globally has brought the question of ethical AI to a wider audience, but it has also accelerated, perhaps unexpectedly, the integration of advanced language models into a great deal of software, since OpenAI, born as a non-profit, lets developers use GPT in their own projects and apps. In the words of the signatories, the proposed pause should be seen as a way to make AI development "more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal", while companies will have to work alongside legislators to create AI governance systems.
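To illustrate the last point, how GPT ends up inside third-party apps, here is a minimal sketch of a chat call through OpenAI's public API. It is an assumption-laden example (the `openai` Python SDK v1.x, an OPENAI_API_KEY in the environment, an illustrative model name and prompt), not a description of any specific integration mentioned above.

```python
# Minimal, illustrative sketch of how a developer embeds a GPT model in an app
# via OpenAI's API. Assumes the `openai` Python package (v1.x) and an
# OPENAI_API_KEY in the environment; the model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; apps pick whichever model tier they need
    messages=[
        {"role": "system", "content": "You are a concise assistant inside a news app."},
        {"role": "user", "content": "Summarize the Future of Life Institute open letter in two sentences."},
    ],
)

# The generated text the app would show to its user.
print(response.choices[0].message.content)
```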