Updated: Feb 12, 2019
AI researchers have made remarkable progress over the past three years. In 2016 I shared part one of a story on Microsoft's badly behaved chatbot. A remarkable, recently published advance has prompted me to write its sequel immediately. Last week, Facebook AI Research and Stanford University described an approach that allows a chatbot to learn from its conversations.
In this approach, chatbot responses are still handwritten by humans, but the researchers' software showed it could select from a large body of these dialogs, choosing the responses judged best for the human's experience.
Facebook's immense repository of human conversations likely played a key role in developing this capability. Moving beyond long short-term memory (LSTM) deep learning methods, their new approach employs a well-trained language model that pays "attention" to the back-and-forth discussion as it progresses.
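To make the selection idea concrete, here is a minimal, illustrative sketch of ranking handwritten candidate responses against the dialogue history. A toy word-overlap scorer stands in for the trained attention-based model described above; the function names and example dialog are hypothetical, not from the paper.

```python
# Toy sketch of candidate-response selection: score each handwritten
# candidate against the dialogue history and return the best one.
# The cosine-over-bag-of-words scorer is a stand-in for a learned model.
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words vector: a Counter of lowercase word tokens."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_response(history, candidates):
    """Return the candidate that scores highest against the history."""
    hist_vec = bow(" ".join(history))
    return max(candidates, key=lambda c: cosine(bow(c), hist_vec))

# Hypothetical dialog used only to exercise the sketch.
history = ["I just adopted a puppy", "What breed is it?", "A golden retriever"]
candidates = [
    "Golden retrievers are a wonderful breed!",
    "I prefer cats myself.",
    "The weather is nice today.",
]
print(select_response(history, candidates))
# → Golden retrievers are a wonderful breed!
```

In the actual system, a trained neural ranker replaces the overlap score, but the overall shape — score every candidate against the conversation so far, pick the argmax — is the same.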
This "self-feeding chatbot" takes us to new and improved level of conversational AI. Even better, the researchers will be making the source dataset available for FREE to the rest of the community at ParlAI.
And that bodes very well for astonishing innovation of consumer experiences that will arise as organizations boldly transform their businesses to improve their interactions with customers.
 "The magic of LSTM neural networks," Assaad Moawad; February 22, 2018, medium.com/datathings/the-magic-of-lstm-neural-networks-6775e8b540cd
Copyright (c) 2019, Jack C Crawford, All rights reserved