On March 22, 2023, the non-profit Future of Life Institute, a think tank based in Cambridge, Massachusetts, published an open letter titled "Pause Giant AI Experiments: An Open Letter," signed by Elon Musk, George Dyson, Steve Wozniak, and Yuval Noah Harari, among others.

The letter warns of nothing less than the end of the world if artificial intelligence is simply developed further without regulation; that, it argues, is why a pause of at least six months is needed. The alarm was probably triggered by ChatGPT, the groundbreaking large language model that impresses with its eloquence, its general knowledge, and its ignorance. Anyone who has chatted with ChatGPT knows: this is going to be big, this is going to change the world.

But is eloquence synonymous with intelligence? It helps to know how a large language model works: an artificial neural network is trained, in the case of ChatGPT on a great deal of text, much of it from the Internet; exactly which texts were chosen for this is not entirely clear.

ChatGPT then learns how likely one word is to follow another and answers each question accordingly. So when ChatGPT is asked what Berlin is, it replies that Berlin is the capital of Germany. Not because it has any notion of what Berlin is, what a city is, or where Germany lies, but because that is the statistically most likely answer. If I trained a neural network on text claiming that Berlin is the name of a Chinese noodle dish, it would give me that answer when asked.
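The point can be illustrated with a deliberately tiny word-statistics model. The sketch below is not how ChatGPT works internally (a real large language model uses a neural network over subword tokens, not simple word counts), and the two training corpora are made up for the example; but it shows the same principle: the model has no concept of cities or dishes, it only echoes whatever statistics its training text happens to contain.

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, how often every other word follows it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def most_likely_next(follows, word):
    """Return the statistically most frequent successor of `word`, or None."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

def continue_text(follows, start, length=5):
    """Greedily extend `start` with the most likely next word at each step."""
    out = [start]
    for _ in range(length):
        nxt = most_likely_next(follows, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

# Two hypothetical training corpora. The "model" is nothing but the
# word-succession statistics of whichever text it was trained on.
model_a = train_bigrams("berlin is the capital of germany")
model_b = train_bigrams("berlin is a chinese noodle dish")

print(continue_text(model_a, "berlin"))  # berlin is the capital of germany
print(continue_text(model_b, "berlin"))  # berlin is a chinese noodle dish
```

Same question, opposite answers; the only difference is the training text, which is the whole argument in miniature.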

The Internet is already full of propaganda

Experience has shown that even people whose intelligence can be doubted can cause harm. But what damage do Musk and his co-signatories fear from artificial intelligence? In the open letter, these fears are formulated as questions. The first, namely whether we want machines to flood our information channels with propaganda and untruth, is not devoid of comedy – after all, it was Elon Musk who fired almost all the people at Twitter who worked on moderation.

In addition, just last week Musk withdrew Twitter from the EU's voluntary Code of Practice on Disinformation, under which Twitter had pledged, among other things, to prevent the spread of disinformation via online advertising. The Internet is already full of propaganda and untruth; the only thing AI could change is that there will be more of both. The problem is to be solved not technically but socially.

Should we, the open letter asks, automate all jobs, including those that promise fulfillment and satisfaction? This very distinction between fulfilling jobs and the rest should be a hint to everyone as to which jobs definitely ought to be automated. With what justification should people do dull work for little money when a machine could do it instead – a machine that does not become depressed, because it simply cannot become depressed?

In the history of capitalism and technology, every – really, every – technology has sooner or later been used to rationalize jobs away. This went so far that, in the metalworking industry for example, skilled workers were replaced by machines that delivered poorer quality but demanded no wages and never went on strike. Which shows us that the technology is not the problem. Ms. Müller would surely have no objection to her job at a customer service hotline being done in the future by a large language model, if only she continued to receive her wages. Naturally, this will not happen. But is that the fault of the large language model?