The head of ChatGPT developer OpenAI sees a risk that artificial intelligence could be used to spread misinformation, and he has spoken out in favour of strict regulation. Because of the massive resources required, only a few companies will be able to pioneer the training of AI models, Sam Altman said Tuesday at a hearing in the U.S. Senate in Washington. Those companies, he argued, would have to be placed under strict supervision.

Altman's OpenAI was instrumental in triggering the current AI boom with the chatbot ChatGPT and with software that generates images from text descriptions.

ChatGPT formulates texts by estimating, word by word, the probable continuation of a sentence. One current consequence of this procedure is that the software produces not only correct information but sometimes completely fabricated information, and the user cannot tell the difference. That is why there are fears that such systems could be used, for example, to produce and disseminate misinformation. Altman also expressed this concern at the hearing.
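To make the mechanism concrete, here is a minimal toy sketch of word-by-word generation. It is not OpenAI's implementation; the probability tables are invented for illustration, whereas a real language model learns such estimates from enormous amounts of text. The point it demonstrates is the one in the paragraph above: the procedure optimizes for a probable-sounding continuation, not a true one.

```python
import random

# Hypothetical next-word probability tables (values made up for illustration;
# a real model derives these estimates from its training data).
NEXT_WORD = {
    "of": {"france": 0.5, "australia": 0.5},
    "france": {"is": 1.0},
    "australia": {"is": 1.0},
    "is": {"paris": 0.6, "sydney": 0.4},  # "sydney" sounds plausible but is wrong
}

def generate(prompt, max_words=6):
    """Extend the prompt one word at a time, sampling each continuation
    from the model's probability estimate for the next word."""
    words = prompt.lower().split()
    for _ in range(max_words):
        dist = NEXT_WORD.get(words[-1])
        if dist is None:  # no estimate for this word: stop generating
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the capital of australia is"))
# May print "... sydney": a fluent, statistically likely, factually false answer.
```

Nothing in the sampling loop checks facts; whether the output happens to be true depends entirely on which continuation the probabilities favour, which is why fabricated statements read just as fluently as correct ones.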

Government agency to scrutinize AI models

Altman proposed the creation of a new government agency that could put AI models to the test. A series of safety tests should be required for artificial intelligence, for example checking whether a system could spread autonomously. Companies that do not comply with prescribed standards should have their licenses revoked. The AI systems should also be open to testing by independent experts.

Altman acknowledged that AI technology could eliminate some jobs through automation in the future. At the same time, however, he said it has the potential to create "much better jobs".

During the hearing before the Senate subcommittee, Altman did not rule out the possibility that, over time, OpenAI's programs could also be offered with advertising instead of the subscriptions used today.