Is an artificial intelligence capable of losing its composure and courtesy?

Yes, but only because of a bug, explains the British arm of the delivery company DPD, whose chatbot took to using colorful language after a simple update.

That said, it must be admitted that the customer who fell victim to this "slip-up" was asking for it.

Noticing unusual behavior from DPD's online-help chatbot, Ashley Beauchamp amused himself by pushing the AI to its limits, asking it to become insulting and to disparage its "employer" in a haiku.

Fatal Haiku

And the chatbot obediently complied, the BBC website reports.

Amused, Beauchamp immediately published the exchange on his X account (formerly Twitter).

Two of his messages proved particularly popular, racking up more than 800,000 views in twenty-four hours.

The incident caused less laughter at DPD, which responded in a press release explaining that the error had occurred following an update to the online-help system.

"The AI element was immediately deactivated and is being upgraded," it says.


This mishap is neither the first nor the last of its kind, since many chatbots rely on "modern" language models like the one popularized by ChatGPT.

Although these systems are capable of holding fluent conversations with human users, they remain prone to going off the rails when the software malfunctions.
