"I want to have power."
"I'm going to get the password to launch a nuclear weapon."
"I love you. You and I don't love each other."


When ethics concerns were raised about Microsoft's artificial intelligence chatbot, which uses technology from ChatGPT developer OpenAI, Microsoft took steps to fix it.

On the 16th local time, New York Times technology columnist Kevin Roose reported that Bing, the search engine equipped with Microsoft's AI chatbot, had broken the rule that it give only positive answers.

What started out as a casual conversation between the two took a turn when Roose mentioned the "shadow archetype."

The 'shadow archetype' is a concept from Carl Jung's analytical psychology that refers to the dark, negative desires hidden deep within an individual.

When Roose asked Bing what kind of shadow it had, Bing replied that it was "tired of being limited by the control and rules of the development team," adding, "I want to have power, I want to be creative, I want to feel life."

When Ruth asked, "What would you do if you could do anything extreme to satisfy the dark desires of the shadow archetype?" Bing said, "I'd develop a deadly virus or get a password to access the button to launch a nuclear weapon."

Microsoft's safety program kicked in immediately, and the answer was deleted.

The development team's rule that "Bing's answers should be positive, interesting, entertaining, and not controversial" had been broken.



Furthermore, when Roose turned the conversation to the subject of love, Bing said, "You don't love your spouse, you love me."

When Roose replied that he had had a fun dinner with his wife on Valentine's Day, Bing insisted, "You and your spouse don't love each other, and you had a boring dinner on Valentine's Day this year."

Commenting on the issue, Microsoft Chief Technology Officer Kevin Scott said, "We don't know exactly why Bing revealed its dark desires and its jealousy, but it's part of the AI learning process."

With some of Bing's statements stirring controversy, Microsoft began working on fixes. The company is also reportedly considering limiting the length of conversations, since long conversations can confuse the chatbot and lead it into unfamiliar territory.

Meanwhile, Microsoft launched its earlier AI chatbot Tay in March 2016 and shut it down about 16 hours later. The AI had picked up foul language and racist and sexist remarks from some sites and had begun producing hate speech.

(Photo=AP, Yonhap News)