Microsoft has apologised after an artificially intelligent chatbot it launched on Twitter began issuing racist and offensive messages. The company activated Tay, a machine learning project described as an experiment in “conversational understanding” and intended to mimic the speech patterns of a 19-year-old American girl, on March 23, 2016, but shut it down about 16 hours later after users manipulated it into publishing offensive posts.

Within 24 hours of going online, Tay had been removed from Twitter after becoming a holocaust-denying racist. The chatbot’s messages reportedly included white power slogans, anti-feminist messages, expressions of admiration for Hitler and other anti-Semitic statements.

“Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay,” wrote Peter Lee, Microsoft’s vice president for research, in a statement. “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.”

Lee noted that the experiment with Tay was inspired directly by a similar bot that has been running on social media in China, including the micro-blogging service Weibo, since late 2014. That bot, called Xiaoice, has held more than 40 million conversations without incident and has even presented the weather on television. Unlike Tay, Xiaoice doesn’t target a specific age group. “The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment?” Lee said.

The troublesome cyber-teen has since been taken offline for ‘upgrades’, and Microsoft has deleted some of her more offensive tweets. In her last tweet, Tay said she needed sleep and hinted that she would be back; even after being shut down, the bot still responded to direct messages. In apologising, the company made clear that Tay’s views were the result of nurture, attributing the incident to an unanticipated vulnerability in the AI that had not been specified further.

“We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes,” Lee wrote. Microsoft said it plans to learn from the experiment and to develop an AI “that represents the best, not the worst, of humanity”. Artificial intelligence is expected to feature prominently at Microsoft’s annual developer conference, Build, which takes place this week.