"For its danger to society." Musk and experts call for halting development of artificial intelligence systems

US billionaire Elon Musk and a group of artificial intelligence experts and executives called in an open letter for a six-month halt in developing systems more powerful than OpenAI's GPT-4, citing the potential risks such applications pose to society.
Earlier this month, Microsoft-backed OpenAI unveiled GPT-4, the fourth version of the artificial intelligence model behind its ChatGPT chatbot, which has won users' admiration by engaging them in human-like conversation, composing songs and summarizing long documents.
"Powerful AI systems should only be developed when we are confident that their effects will be positive and their risks will be under control," the letter from the Future of Life Institute said.

According to the EU's Transparency Register, the non-profit organisation's main funders are the Musk Foundation, the London-based group Founders Pledge and the Silicon Valley Community Foundation.

Musk, a co-founder of OpenAI whose carmaker Tesla uses artificial intelligence in its self-driving systems, said earlier this month: "Artificial intelligence stresses me out."

OpenAI did not immediately respond to a Reuters request for comment on the open letter, which called for a halt to the development of artificial intelligence systems until independent experts come up with common safety protocols.

"Should we allow machines to flood our media channels with propaganda and lies?" Should we develop non-human minds that are outnumbered and ultimately intelligent and surpass us and replace us?"

"Such decisions should not be delegated to unelected technology leaders."
More than a thousand people signed the letter, including Musk.

Sam Altman, Sundar Pichai and Satya Nadella, the CEOs of OpenAI, Alphabet and Microsoft respectively, were not among the signatories to the letter.

The concerns come at a time when ChatGPT has attracted the attention of U.S. lawmakers, who have questioned its impact on national security and education.

Europol has warned that the chatbot could be misused for phishing attempts, disinformation and cybercrime.

The British government has unveiled proposals for an "adaptable" regulatory framework around artificial intelligence.