Groundbreaking announcements in the field of artificial intelligence (AI) now arrive almost weekly. After OpenAI released GPT-4, the new version of the AI model behind ChatGPT, just this March, Google is following suit: at its I/O developer conference, the internet company announced PaLM 2. Faster, bigger, better. In addition, the AI chatbot Bard is now available worldwide.

Given this enormous pace, it is understandable that many people worry the development could spin out of control. Especially since experts such as AI pioneer Geoffrey Hinton, who recently left Google, warn in emphatic terms about the technology's potential effects.

A moratorium on development, as called for by some experts and entrepreneurs, is nevertheless a bad idea. For one thing, it would be nearly impossible to enforce. The proverbial genie of generative artificial intelligence is out of the bottle: technological advances are lowering the hurdles even for small developers to train or experiment with large AI models, and countless smaller alternatives to the big corporations' models already exist. Who is supposed to police a development stop?

For another, today's models do not operate autonomously: if no one types anything, nothing happens. There is no doubt, however, that people can misuse the technology, for example for propaganda and to spread misinformation. Certainly, new rules are needed for dealing with it.

At the end of the day, though, only responsible development can make AI safer. Google, for example, is taking a step in the right direction with its announcement that it will watermark images generated by its AI in the future. More of that is needed.