350 global experts warn in a joint statement: artificial intelligence is a danger on a par with pandemics and nuclear war

More than 350 AI experts around the world, including the head of the company behind ChatGPT, have warned that the technology could lead to the extinction of humanity.

In a joint statement, backed by the CEOs of leading AI companies, they said that mitigating this risk "must be a global priority alongside other societal-scale risks, such as pandemics and nuclear war."

Many experts have already voiced concern about the dangers of models such as ChatGPT being used to spread misinformation, enable cybercrime and disrupt jobs on a large scale.

The statement, coordinated by the US-based Center for AI Safety (CAIS), acknowledged these concerns but argued that more severe, if less likely, threats should also be discussed, including the possibility that rapidly advancing AI could lead to the collapse of civilization.

Some computer scientists fear that a superintelligent AI with interests incompatible with those of humans could displace or destroy us.

Others worry that over-reliance on systems we do not understand leaves us exposed to catastrophic risk if something goes wrong.

Dan Hendrycks, director of CAIS, said the statement gave many researchers a way to voice their concerns: "People were too scared to speak out before."

Other academics dismissed the statement as unhelpful. Dr Mhairi Aitken, an ethics research fellow at the Alan Turing Institute, described it as a "distraction" from more pressing threats posed by AI.

She added: "The superintelligent AI narrative is a familiar plot from countless Hollywood blockbusters."

Dr Carissa Véliz, of the Institute for Ethics in AI at the University of Oxford, was skeptical of the motives of some of the signatories.

"I fear that focusing on the existential threat distracts from the most pressing issues, such as the erosion or demise of democracy, that CEOs of certain companies do not want to face," she said. "AI can cause massive destruction without existential risk."
Concerns about the most serious threats from artificial intelligence range from the possibility of it being used by humans to design biological weapons, to the artificial intelligence itself that is planning the collapse of civilization.