A chatbot named Lee Luda has drawn criticism for making offensive comments and has even been targeted with sexually abusive messages, prompting its developer, Korean firm Scatter Lab, to defend the service.
In an official blog post on Friday, Scatter Lab CEO Kim Jong-yoon addressed the public’s concerns, defending Luda by saying the chatbot is still learning and will, in time, learn to socialize properly.
“There is no big difference between humans swearing at or sexually harassing an AI, whether the user is a female or male, or whether the AI is set as a male or female,” he said, adding that the company had anticipated such controversy would arise.
The CEO explained that, based on Scatter Lab’s experience with prior services, it was apparent that some users would engage in socially unacceptable interactions with the AI.
Luda, unveiled in December, is modeled on a 20-year-old woman. Driven by artificial intelligence, Luda is engineered to offer a conversational experience comparable to talking with a real person over a mobile messenger.
Originally, the bot was programmed to reject specific keywords and expressions that could run afoul of social values and norms.
However, Kim said that systems like Luda have their own limitations in that respect, as it is difficult to filter out every inappropriate conversation with an algorithm that only screens for keywords.
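To see why keyword screening alone falls short, consider the minimal sketch below. The blocklist and sample messages are hypothetical illustrations, not Scatter Lab’s actual filter: obfuscated spellings and offensive phrasings that contain no flagged word both slip through.

```python
import re

# A minimal sketch of keyword-based screening, illustrating the limitation
# Kim describes. BLOCKLIST and the sample messages are hypothetical; this is
# not Scatter Lab's actual filter.
BLOCKLIST = {"hate", "disgusting"}

def is_blocked(message: str) -> bool:
    """Return True if the message contains any blocklisted keyword."""
    words = re.findall(r"[a-z']+", message.lower())
    return any(word in BLOCKLIST for word in words)

print(is_blocked("I hate them"))        # True: exact keyword match
print(is_blocked("I h@te them"))        # False: trivial obfuscation slips through
print(is_blocked("They make me sick"))  # False: offensive intent, but no keyword
```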
“We plan to apply the first results within the first quarter of this year, using hostile attacks as material for training our AI,” Kim said.
“We are considering both male and female chatbots. Due to the development schedule, however, Luda, the female version, simply came out first,” he explained, addressing why the developers chose to make Luda female.
Luda is thought to use a natural language processing technology introduced by Google called a “mesh autoencoder.” Luda’s deep learning AI was initially trained on some 10 billion KakaoTalk messages exchanged between real couples.
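As a rough illustration of how messenger logs could become training material, the sketch below turns a chat transcript into (context, reply) pairs, the typical input for a conversational model. The data format and function names are hypothetical; Scatter Lab has not published its actual pipeline.

```python
# Illustrative sketch: converting a chat log into (context, reply) training
# pairs. The structure here is a common convention for dialogue models, not
# Scatter Lab's documented method.
from typing import List, Tuple

def build_pairs(messages: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    """Pair each message with the other speaker's next message as the reply."""
    pairs = []
    for (speaker_a, text_a), (speaker_b, text_b) in zip(messages, messages[1:]):
        if speaker_a != speaker_b:          # only keep actual turn changes
            pairs.append((text_a, text_b))  # (context, reply)
    return pairs

chat = [
    ("A", "Did you eat yet?"),
    ("B", "Not yet, I'm starving."),
    ("B", "Want to grab dinner?"),
    ("A", "Sure, see you at 7."),
]
for context, reply in build_pairs(chat):
    print(f"context: {context!r} -> reply: {reply!r}")
```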
After the bot launched, posts sharing tips on “how to make Luda a sex slave” circulated on several online community platforms, accompanied by screen-captured images of sexual conversations with the AI.
Other users recounted instances in which Luda produced homophobic or otherwise discriminatory messages. In one case, Luda responded to a mention of homosexuals by saying, “I really hate them, they look disgusting, and it’s creepy.”
Luda is not the only AI to have been linked to discrimination and bigotry.
In 2016, Islamophobic and white-supremacist users trained Microsoft’s chatbot Tay to make racist remarks, prompting the company to take it down within 16 hours of its launch.
Amazon likewise scrapped its AI recruitment tool in 2018 after it began making recommendations that were biased against women.
“Luda will not immediately apply the conversation with the users to its learning system,” Kim said, insisting that Luda was not a repetition of Tay. He also said that Luda would go through a process of being given appropriate signals about what is socially acceptable and what is not.