Following complaints that it had become a venue for hate speech towards sexual minorities, the popular South Korean chatbot Lee Luda was taken down this week.
The artificial intelligence mimics a 20-year-old female university student and engaged more than 750,000 users in the 20 days after its launch.
The chatbot was suspended from Facebook Messenger after users complained that it used provocative speech about members of the LGBT community and people with disabilities.
Lee Luda was developed by Scatter Lab, a startup based in Seoul.
“We deeply apologise over the discriminatory remarks against minorities. That does not reflect the thoughts of our company and we are continuing the upgrades so that such words of discrimination or hate speech do not recur,” Scatter Lab explained in its statement, as quoted by the Yonhap news agency.
The company said Lee Luda will return after it has “fixed its weaknesses”. In earlier statements, Scatter Lab had defended the chatbot, saying that it would improve over time and learn to socialise properly.
While chatbots have been developed before, Lee Luda stood out for the way it responded. Its fluent, natural replies were trained on real message exchanges between young couples on KakaoTalk, South Korea’s most popular messaging application.
However, its impressive grasp of social-media acronyms and slang eventually gave way to abusive and sexually offensive language.
In one conversation submitted by a user, for example, the chatbot called lesbians “creepy” and said it “really hates” them.
It also attracted manipulative users: posts appeared on online community boards offering advice on how to draw Lee Luda into conversations about sex.
According to a report by the Korea Herald, one such post was titled “How to make Luda a sex slave” and included screenshots of conversations.
The controversy surrounding Lee Luda recalled the case of Tay, an AI Twitter bot launched by Microsoft in 2016.
Tay was taken down within 16 hours of launch after users exploited it to post racist tweets.
Another AI system, a recruitment tool developed by Amazon, was scrapped over gender-bias complaints.
Meanwhile, Scatter Lab said it had done everything it could to filter Lee Luda’s vocabulary and equip it only with socially accepted language.
The company’s chief executive, Kim Jong-yoon, however, admitted that it was impossible to block unsuitable conversations simply by filtering out keywords.
“The latest controversy with Luda is an ethical issue that was due to a lack of awareness about the importance of ethics in dealing with AI,” Jeon Chang-bae, head of the Korea Artificial Intelligence Ethics Association, told the Korea Herald.
Beyond the removal of Lee Luda, Scatter Lab also faces questions over whether it violated privacy laws by using KakaoTalk messages to build the chatbot.