They are sounding the alarm. While the machine is capable of generating photorealistic images and passing the US bar exam or medical entrance exams, Elon Musk and hundreds of global experts on Wednesday signed a call for a six-month pause in research on artificial intelligences more powerful than GPT-4, the OpenAI model launched in mid-March, citing "major risks for humanity".

In this petition, published on the futureoflife.org website, they call for a moratorium until security systems are put in place, including new dedicated regulatory authorities, oversight of AI systems, techniques to help distinguish the real from the artificial, and institutions capable of managing the "dramatic economic and political disruption (especially for democracy) that AI will cause".

"Potential for manipulation"

The petition brings together personalities who have already publicly expressed their fears about uncontrollable AIs that would surpass humans, including Elon Musk, owner of Twitter and founder of SpaceX and Tesla, and Yuval Noah Harari, the author of "Sapiens".

Yoshua Bengio, a Canadian AI pioneer and fellow signatory, expressed his concerns during a virtual press conference in Montreal: "I do not think that society is ready to face this power, the potential for manipulation of populations, for example, that could endanger democracies."

"We must therefore take the time to slow down this commercial race that is under way," he added, calling for these issues to be discussed at the global level, "as we have done for energy and nuclear weapons."

Researchers are divided on recent progress. Some believe that GPT-4 shows glimpses of what could lead to a "strong" artificial intelligence, capable of learning as humans do. Others, such as French pioneer Yann LeCun, are more reserved.

"Society needs time to adapt"

Sam Altman, the head of OpenAI, the developer of ChatGPT, himself admitted to being "a little scared" by his creation if it were used for "large-scale disinformation or cyberattacks". "Society needs time to adapt," he told ABC News in mid-March. But he did not sign the call for a moratorium.

"Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can reliably understand, predict or control," the signatories wrote.

"Should we let machines flood our information channels with propaganda and lies? Should we automate away all jobs, including those that are fulfilling? Should we develop non-human minds that could one day outnumber us, outsmart us, make us obsolete and replace us? Should we risk losing control of our civilization? These decisions must not be delegated to unelected tech leaders," they conclude.

The signatories also include Apple co-founder Steve Wozniak, members of Google's AI lab DeepMind, Emad Mostaque, the boss of OpenAI competitor Stability AI, as well as American AI experts and academics, and senior engineers from Microsoft, an OpenAI ally.
