More cautious: Meta unveils its own artificial intelligence model

Facebook's owner, Meta, has unveiled its own version of the artificial intelligence technology behind apps such as ChatGPT, saying it will allow researchers to find solutions to the technology's potential risks.

Meta described its artificial intelligence software, called LLaMA, as a "smaller, better-performing" model designed to help researchers advance their work, in what could be read as veiled criticism of the decision by OpenAI and its partner Microsoft to release the technology at scale while keeping the underlying code secret.

OpenAI's ChatGPT software, backed by Microsoft, has caused a stir worldwide with its ability to produce elaborate texts such as articles or poems in just seconds, using a technique known as large language models (LLMs).

LLM technology belongs to a field known as generative artificial intelligence, which also covers the ability to produce images, designs, or programming code almost instantly from a simple prompt.

Microsoft has deepened its partnership with OpenAI, creator of ChatGPT, announcing earlier this month that the technology will be integrated into its Bing search engine as well as the Edge browser.

Google, which sees a sudden threat to its search engine dominance, has announced that it will soon launch its own artificial intelligence chatbot, named Bard.

But reports of troubling exchanges with the chatbot built into Microsoft's Bing search engine, including threats and talk of wanting to steal nuclear codes, have gone viral, giving the impression that the technology is not yet ready.

Meta argued that such problems, which some have likened to hallucinations, could be better addressed if researchers had greater access to the expensive technology.

The company said that full research access remains limited because of the resources required to train and run such large models.

This hinders efforts to improve the models' capabilities and to mitigate known problems such as bias and the tendency to generate misinformation, Meta said.