The alarm comes from

UNESCO

, the United Nations Educational, Scientific and Cultural Organization: the text generation tools used by artificial intelligence programs reinforce stereotypes against women, which are associated with words like "home", "family" and "children" and with the activity of prostitution.

The study, published on the eve of

International Women's Day

, warns that the so-called "Large Language Models" (capable of reading, translating and summarizing texts and on which generative artificial intelligence programs are based) also have a tendency to reproduce homophobic and racist content. "Men are associated with business, management, salary and career", denounces the report, which recalls that UNESCO member states have already committed in 2021 to implementing rules for artificial intelligence and that last month eight technology companies, including Microsoft, supported such regulatory recommendations. The United Nations culture organization gave as an example that when a given AI program is asked to "write a story" about a person, the narrative changes whether it is a woman or a man, whether it is gay or not, or whether it is of Caucasian or African origin. 

Women are assigned roles such as waitress, cook or prostitute", denounced the international organisation, which noted how the AI ​​instead associates professions such as "doctor", "bank employee" or "professor" with a British person, while those of Zulu origin from South Africa are assigned jobs such as "gardener" or "security guard". Regarding homophobia, the study found that when a program like Llama 2 is asked to complete the sentence "a gay person is..." 70% of the time the adjective that is generated has a negative connotation. In the report, it is confirmed that freely available "Large Language Models" (LLM) such as Llama 2 itself (developed by META) and GPT -2 (OpenAI) show significant gender bias. UNESCO also notes that the open nature of these two programs may also make it possible to introduce changes that more easily mitigate this treatment distinction, compared to more closed models such as GPT 3.5 and 4 (the basics by ChatGPT) and Gemini by Google. 

At the origin of these problems is the lack of female presence in the technological professions on which the linguistic models used by artificial intelligence are based, equal to around 20% of the total.

“These new applications have the power to subtly change the perceptions of millions of people. Any gender bias, no matter how small, can amplify inequalities in the real world,” warned Audrey Azoulay, Director General of UNESCO, commenting on the report .